Fast Function Extraction (FFX) is a deterministic algorithm for solving symbolic regression problems. We improve the accuracy of FFX by adding parameters to the arguments of non-linear functions. Instead of only optimizing the linear parameters, we also optimize these additional non-linear parameters with separable non-linear least-squares optimization using a variable projection algorithm. Both FFX and our new algorithm are applied to the PennML benchmark suite. We show that the proposed extension of FFX leads to higher accuracy while providing models of similar length and only a moderate increase in runtime on the given data. Our results are compared to a large set of regression methods that have already been published for the given benchmark suite.
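The key idea of separable non-linear least squares with variable projection can be illustrated in a few lines. The sketch below is not the authors' FFX implementation and the basis functions are invented for illustration; it only shows the structure: for every candidate setting of the non-linear inner parameters, the optimal linear coefficients are found in closed form, so the outer optimizer searches over the non-linear parameters only.

```python
# Minimal sketch of separable non-linear least squares via variable projection.
# Basis functions and data are illustrative, not the FFX basis set.
import numpy as np
from scipy.optimize import least_squares

def basis(x, theta):
    """Basis functions with inner (non-linear) parameters theta."""
    return np.column_stack([
        np.ones_like(x),
        x,
        np.log(np.abs(theta[0]) + x ** 2),
        np.exp(theta[1] * x),
    ])

def projected_residuals(theta, x, y):
    # Inner step: optimal linear coefficients for this theta (ordinary least squares).
    Phi = basis(x, theta)
    coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ coeffs - y

rng = np.random.default_rng(0)
x = np.linspace(0.1, 2.0, 200)
y = 1.5 + 0.3 * x + 2.0 * np.log(0.5 + x ** 2) + 0.1 * rng.normal(size=x.size)

# Outer step: non-linear least squares over the non-linear parameters only.
result = least_squares(projected_residuals, x0=[1.0, -1.0], args=(x, y))
print("fitted non-linear parameters:", result.x)
```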
Materials modeling at the atomic scale plays an important role in the development of new materials and the understanding of their properties. The accuracy of particle simulations is determined by the interatomic potential, which allows calculating the potential energy of an atomic system as a function of the atomic coordinates and, potentially, other properties. Ab initio potentials can reach arbitrary levels of accuracy, however their applicability is limited by their high computational cost. Machine learning (ML) has recently emerged as an effective way to offset the high computational cost of ab initio atomic potentials by replacing expensive models with highly efficient surrogates trained on electronic-structure data. Among the current large variety of approaches, symbolic regression (SR) is emerging as a powerful "white-box" approach to discover functional forms of interatomic potentials. This contribution discusses the role of symbolic regression in materials science (MS) and provides a comprehensive overview of current methodological challenges and state-of-the-art results. A genetic-programming-based approach for modeling atomic potentials from raw data (consisting of snapshots of atomic positions and the associated potential energy) is presented and empirically validated on ab initio electronic-structure data.
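To make the notion of a "white-box" interatomic potential concrete, the snippet below evaluates a classic closed-form pair potential over one snapshot of atomic positions. The Lennard-Jones form is only a familiar stand-in for the kind of expression a symbolic regression run might discover; it is not the model from the paper.

```python
# Illustrative closed-form interatomic potential: total energy of a snapshot
# of atomic positions under a Lennard-Jones pair potential.
import numpy as np

def lennard_jones_energy(positions, epsilon=1.0, sigma=1.0):
    """Total potential energy of an atomic configuration with shape (N, 3)."""
    energy = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            energy += 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return energy

snapshot = np.array([[0.00, 0.00, 0.0],
                     [1.12, 0.00, 0.0],
                     [0.00, 1.12, 0.0]])
print(lennard_jones_energy(snapshot))
```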
Multi-objective symbolic regression has the advantage that, while the accuracy of the learned models is maximized, their complexity is automatically adapted and need not be specified a priori. The result of the optimization is no longer a single solution but a whole Pareto front describing the trade-off between accuracy and complexity. In this contribution we study which complexity measures are most appropriately used in symbolic regression when performing multi-objective optimization with NSGA-II. Furthermore, we present a novel complexity measure that includes semantic information based on the function symbols occurring in the models and test its effects on several benchmark datasets. Results comparing multiple complexity measures are presented in terms of the achieved accuracy and model length to illustrate how the search direction of the algorithm is affected.
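A complexity measure of this kind can be computed recursively over the expression tree, with different function symbols contributing differently. The sketch below is a hedged illustration of that idea, not necessarily the measure proposed in the paper: additive symbols accumulate the complexity of their children, multiplicative symbols compound it, and non-linear symbols amplify it.

```python
# Illustrative recursive complexity measure over a symbolic regression model,
# weighting function symbols differently (assumed scheme, not the paper's).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Node:
    symbol: str                       # 'x', 'const', '+', '*', 'sin', 'exp', ...
    children: Tuple["Node", ...] = ()

def complexity(node: Node) -> float:
    child = [complexity(c) for c in node.children]
    if node.symbol in ("x", "const"):
        return 1.0
    if node.symbol == "+":
        return sum(child)             # additive symbols just accumulate
    if node.symbol == "*":
        c = 1.0
        for v in child:
            c *= v                    # multiplicative symbols compound
        return c
    return sum(child) ** 2            # non-linear symbols (sin, exp, ...) amplify

# complexity of  x * sin(x + const)
expr = Node("*", (Node("x"),
                  Node("sin", (Node("+", (Node("x"), Node("const"))),))))
print(complexity(expr))               # 1 * (1 + 1)^2 = 4
```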
In material science, models are derived to predict prominent material properties (e.g. elasticity, strength, conductivity) and their relations to processing conditions. A major drawback is the calibration of model parameters that depend on the processing conditions. Currently, these parameters must be optimized to fit measured data, since their relations to the processing conditions (e.g. deformation temperature, strain rate) are not fully understood. We present a new approach that identifies the functional dependency of calibration parameters on the processing conditions based on genetic programming. We propose two (explicit and implicit) methods to identify these dependencies and to generate short, interpretable expressions. The approach is used to extend a physics-based constitutive model for deformation processes. This constitutive model operates with internal material variables such as a dislocation density and contains a number of parameters, among them three calibration parameters. The derived expressions extend the constitutive model and replace the calibration parameters, thereby enabling interpolation between different processing parameters. Our results show that the implicit method is computationally more expensive than the explicit one, but also produces significantly better results.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
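The NAIVEATTACK variant described above can be sketched in a few lines: a small trigger patch is stamped into a fraction of the raw images and their labels are flipped to the attacker's target class before distillation runs. The `distill` call and the loading function in the usage comment are placeholders, not an API from the paper.

```python
# Sketch of trigger injection into the raw data prior to dataset distillation
# (illustrative; patch shape, poison fraction, and the distill API are assumptions).
import numpy as np

def add_trigger(images, labels, target_class=0, poison_frac=0.1,
                patch_size=3, patch_value=1.0, seed=0):
    """images: (N, H, W, C) floats in [0, 1]; labels: (N,) ints."""
    rng = np.random.default_rng(seed)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)),
                     replace=False)
    # white square in the bottom-right corner acts as the backdoor trigger
    poisoned_images[idx, -patch_size:, -patch_size:, :] = patch_value
    poisoned_labels[idx] = target_class
    return poisoned_images, poisoned_labels

# images, labels = load_training_set()              # not shown here
# poisoned_x, poisoned_y = add_trigger(images, labels)
# synthetic_set = distill(poisoned_x, poisoned_y)   # hypothetical distillation call
```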
We present a dynamic path planning algorithm to navigate an amphibious rotor craft through a concave time-invariant obstacle field while attempting to minimize energy usage. We create a nonlinear quaternion state model that represents the rotor craft dynamics above and below the water. The 6-degree-of-freedom dynamics are used within a layered architecture to generate motion paths for the vehicle to follow, together with the required control inputs. The rotor craft has a 3-dimensional map of its surroundings that is updated via limited-range onboard sensor readings within the current medium (air or water). Path planning is done via PRM and D* Lite.
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
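The masked modeling objective described above can be illustrated with a toy training step: a random subset of discrete image tokens is replaced by a mask id, a transformer predicts logits over the codebook at every position, and cross-entropy is evaluated only at the masked positions. This is a minimal sketch, not the Muse architecture; text conditioning, parallel decoding at inference, and the tiny `TokenTransformer` and its sizes are all assumptions made for illustration.

```python
# Toy masked token modeling step over discrete image tokens (illustrative only).
import torch
import torch.nn as nn

VOCAB, MASK_ID, SEQ_LEN = 1024, 1024, 256   # codebook size, mask token id, tokens per image

class TokenTransformer(nn.Module):
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB + 1, dim)           # +1 for the mask token
        self.pos = nn.Parameter(torch.zeros(SEQ_LEN, dim))  # learned positions
        enc = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, tokens):                  # (B, SEQ_LEN) integer tokens
        h = self.embed(tokens) + self.pos
        return self.head(self.encoder(h))       # (B, SEQ_LEN, VOCAB) logits

def masked_modeling_loss(model, tokens, mask_ratio=0.5):
    mask = torch.rand(tokens.shape) < mask_ratio
    corrupted = tokens.masked_fill(mask, MASK_ID)
    logits = model(corrupted)
    # cross-entropy only at the masked positions
    return nn.functional.cross_entropy(logits[mask], tokens[mask])

model = TokenTransformer()
tokens = torch.randint(0, VOCAB, (8, SEQ_LEN))   # stand-in for VQ image tokens
loss = masked_modeling_loss(model, tokens)
loss.backward()
```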
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.